
    Censoring Representations with an Adversary

    In practice, there are often explicit constraints on what representations or decisions are acceptable in an application of machine learning. For example, it may be a legal requirement that a decision must not favour a particular group. Alternatively, it may be that the representation of the data must not contain identifying information. We address these two related issues by learning flexible representations that minimize the capability of an adversarial critic. This adversary tries to predict the relevant sensitive variable from the representation, so minimizing the adversary's performance ensures there is little or no information in the representation about the sensitive variable. We demonstrate this adversarial approach on two problems: making decisions free from discrimination and removing private information from images. We formulate the adversarial model as a minimax problem and optimize that minimax objective using a stochastic gradient alternating min-max optimizer. We demonstrate the ability to provide discrimination-free representations for standard test problems, and compare with previous state-of-the-art methods for fairness, showing statistically significant improvement in most cases. The flexibility of this method is shown via a novel problem: removing annotations from images, from unaligned training examples of annotated and unannotated images, and with no a priori knowledge of the form of annotation provided to the model. Comment: Paper accepted to ICL
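    To make the alternating min-max optimization concrete, the following is a minimal sketch of adversarial censoring in PyTorch. The module shapes, loss choices and the weight lam are illustrative assumptions, not the authors' implementation: an encoder and task predictor are updated to perform the task while degrading an adversary that predicts the sensitive variable from the representation.

        # Minimal sketch of adversarial representation censoring (illustrative, not the paper's code).
        import torch
        import torch.nn as nn

        enc  = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 16))   # x -> representation z
        task = nn.Linear(16, 1)                                                  # z -> task label y
        adv  = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 1))     # z -> sensitive variable s

        opt_model = torch.optim.Adam(list(enc.parameters()) + list(task.parameters()), lr=1e-3)
        opt_adv   = torch.optim.Adam(adv.parameters(), lr=1e-3)
        bce = nn.BCEWithLogitsLoss()

        def train_step(x, y, s, lam=1.0):
            # 1) Adversary step: improve its prediction of s from a frozen representation.
            z = enc(x).detach()
            adv_loss = bce(adv(z), s)
            opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

            # 2) Model step: solve the task while *maximizing* the adversary's loss,
            #    so the representation carries little information about s.
            z = enc(x)
            model_loss = bce(task(z), y) - lam * bce(adv(z), s)
            opt_model.zero_grad(); model_loss.backward(); opt_model.step()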

    Towards a Neural Statistician

    An efficient learner is one who reuses what they already know to tackle a new problem. For a machine learner, this means understanding the similarities amongst datasets. In order to do this, one must take seriously the idea of working with datasets, rather than datapoints, as the key objects to model. Towards this goal, we demonstrate an extension of a variational autoencoder that can learn a method for computing representations, or statistics, of datasets in an unsupervised fashion. The network is trained to produce statistics that encapsulate a generative model for each dataset. Hence the network enables efficient learning from new datasets for both unsupervised and supervised tasks. We show that we are able to learn statistics that can be used for: clustering datasets, transferring generative models to new datasets, selecting representative samples of datasets and classifying previously unseen classes. We refer to our model as a neural statistician, and by this we mean a neural network that can learn to compute summary statistics of datasets without supervision. Comment: Updated to camera ready version for ICLR 201
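    The core idea, computing a summary statistic of a whole dataset with a network, can be sketched as a per-datapoint encoder followed by order-invariant pooling. The sketch below is a generic illustration in PyTorch; the layer sizes, pooling choice and names are assumptions rather than the paper's architecture.

        # Illustrative "statistic network": encode each datapoint, pool across the dataset,
        # and output parameters of a dataset-level latent (the summary statistic).
        import torch
        import torch.nn as nn

        class StatisticNetwork(nn.Module):
            def __init__(self, x_dim=32, h_dim=64, c_dim=16):
                super().__init__()
                self.encode = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
                self.to_mu = nn.Linear(h_dim, c_dim)
                self.to_logvar = nn.Linear(h_dim, c_dim)

            def forward(self, dataset):              # dataset: (n_points, x_dim)
                h = self.encode(dataset)             # per-datapoint features
                pooled = h.mean(dim=0)               # pooling makes the statistic order-invariant
                return self.to_mu(pooled), self.to_logvar(pooled)

        # The same network summarizes datasets of any size.
        stats_net = StatisticNetwork()
        mu, logvar = stats_net(torch.randn(100, 32))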

    Independent reporting sonographers: Could other countries follow the UK's lead?

    Introduction: For many years, the majority of ultrasound examinations in the United Kingdom (UK), both obstetric and non-obstetric, have been performed by radiographers who have undergone postgraduate training. These sonographers scan, interpret and report their own examinations. Today, sonographer-led ultrasound services are an essential and well-established part of diagnostic imaging departments. This model, however, appears to be unique, with very little evidence of sonographers in countries outside the UK offering a similar level of service. Methods: A literature review was undertaken to investigate the background to the evolution of independent reporting in the UK. An exploration of the variable sonography services in other parts of the world was initiated to obtain some insight into whether the UK model is practised elsewhere. Results: In the UK, a successful model for ultrasound services has been practised for almost thirty years, with sonographers performing and reporting on ultrasound examinations. This practice is evidence-based, with studies showing that detection rates and accuracy for ultrasound examinations are similar for sonographers and radiologists. The importance of good working relations with radiology colleagues and of rigorous education and training was apparent in the development of successful sonography reporting practices in the UK. No other country relies so heavily on sonographers. Throughout mainland Europe, physicians and general practitioners perform a significant proportion of ultrasound examinations. In other countries, sonographers may perform the scans, but reporting remains primarily the domain of the overseeing medical staff. Conclusion: Traditionally, sonographers in the UK work closely alongside radiologists, and it is this team-working, along with escalating demand, that has helped lead to the success of the current model. Rigorous professional guidelines and training programmes for sonographers in the UK have helped to ensure high standards of practice amongst sonographers. The escalating need for ultrasound services is now causing some physicians from other parts of the world to focus their attention on the UK model as a possible solution to meet demand. Looking to the future, it is anticipated that more sonographer-led ultrasound departments will emerge and that independent reporting will become common practice for sonographers. To support this, however, it is important that appropriate, rigorous training programmes are established, and those who aspire to be independent reporting sonographers will need to forge good working relationships with medical colleagues.

    Is There Madness in the Method? Researching Flexibility in the Education of Adults

    This paper explores the process of formulating research questions for an ongoing empirical study of conceptions of flexibility and lifelong learning in the context of further education in the UK. The process is represented in three parallel versions: an algorithmic tale, a tale of improvisations and a reflexive tale.

    An Adaptive, Parallel Algorithm for Approximating the Generalized Voronoi Diagram

    A Generalized Voronoi Diagram (GVD) partitions a space into regions based on the distance to arbitrarily-shaped objects. Each region contains exactly one object and consists of all points closer to that object than to any other. GVDs have applications in pathfinding, medical analysis, and simulation. Computing the GVD for many datasets is computationally intensive. Standard techniques rely on uniform gridding of the space, causing failure when the number of voxels becomes prohibitively large. Other techniques use adaptive space subdivision, which avoids failure at the expense of efficiency. Unlike previous approaches, we are able to break up the construction of GVDs into novel work items. We then solve these items in parallel on graphics cards, improving performance. Using these techniques, GVD construction becomes much more efficient, practical, and applicable.
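    As a point of reference for the uniform-gridding baseline mentioned above, the sketch below labels every cell of a regular grid with its nearest object by brute force. Representing objects as point samples is an assumption made for illustration; the cost growing with the voxel count is exactly what makes this baseline impractical for large grids.

        # Brute-force GVD approximation on a uniform grid: label each cell with the
        # nearest object. Objects are point sets here purely for illustration.
        import numpy as np

        def gvd_labels(objects, grid_shape=(256, 256)):
            ys, xs = np.mgrid[0:grid_shape[0], 0:grid_shape[1]]
            cells = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
            best = np.full(len(cells), np.inf)
            labels = np.zeros(len(cells), dtype=int)
            for i, pts in enumerate(objects):
                # distance from every cell to the closest sample point of object i
                d = np.linalg.norm(cells[:, None, :] - pts[None, :, :], axis=2).min(axis=1)
                closer = d < best
                best[closer], labels[closer] = d[closer], i
            return labels.reshape(grid_shape)

        # Two toy objects; the cost scales with (number of cells) x (sample points per object),
        # which is why uniform gridding fails once the voxel count becomes large.
        objs = [np.array([[50.0, 50.0]]), np.array([[200.0, 180.0], [210.0, 190.0]])]
        labels = gvd_labels(objs)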

    Learning from alternative sources of supervision

    With the rise of the internet, data of many varieties, including images, audio, text and video, is abundant. Unfortunately, for a specific task of interest the relevant data is not typically abundant. Typically one might have only a small amount of labelled data, or only noisy labels, or labels for a different task, or perhaps a simulator and reward function but no demonstrations, or even a simulator but no reward function at all. However, arguably no task is truly novel, and so it is often possible for neural networks to benefit from abundant data related to the task at hand. This thesis documents three methods for learning from alternative sources of supervision, as an alternative to the more preferable case of simply having unlimited direct examples of the task. Firstly, we show how data from many related tasks can be described with a simple graphical model and fit using a variational autoencoder, directly modelling and representing the relations amongst tasks. Secondly, we investigate various forms of prediction-based intrinsic rewards for agents in a simulator with no extrinsic rewards. Thirdly, we introduce a novel intrinsic reward and investigate how best to combine it with an extrinsic reward.
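    As a generic illustration of the prediction-based intrinsic rewards mentioned above (not the thesis' specific formulation), a forward model can be trained to predict the next observation, with its prediction error added to the extrinsic reward as an exploration bonus; the architecture and the weight beta are assumed for the sketch.

        # Generic prediction-error intrinsic reward combined additively with the extrinsic reward.
        import torch
        import torch.nn as nn

        forward_model = nn.Sequential(nn.Linear(8 + 2, 32), nn.ReLU(), nn.Linear(32, 8))
        opt = torch.optim.Adam(forward_model.parameters(), lr=1e-3)

        def combined_reward(obs, action, next_obs, extrinsic, beta=0.1):
            pred = forward_model(torch.cat([obs, action], dim=-1))
            error = ((pred - next_obs) ** 2).mean()          # surprise = prediction error
            opt.zero_grad(); error.backward(); opt.step()    # keep improving the forward model
            return extrinsic + beta * error.item()           # intrinsic bonus on top of the extrinsic reward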